n025 0857 12 Aug 82
BC-TECHNOLOGY
(BizDay)
By ANDREW POLLACK
c. 1982 N.Y. Times News Service
NEW YORK - Could computers have prevented the Three Mile Island
accident? The belief that they could has spurred development of
computer systems to assist in the operation of nuclear power plants
and in diagnosing emergencies.
The developments range from relatively simple systems that merely
display pertinent information on a computer screen, to more complex
systems that can diagnose an accident and suggest corrective actions.
Two ''expert systems'' for nuclear power plants, so called because
they mimic the thought process of human experts, will be described at
the National Conference on Artificial Intelligence in Pittsburgh next
week.
The Three Mile Island accident showed that control rooms in nuclear
plants were poorly designed for giving operators needed information,
especially in an emergency. There are thousands of dials, meters,
gauges and lights for the operator to read, and a malfunction causes
hundreds of lights to blink and alarms to sound.
At the time of the accident at Three Mile Island, in March 1979, the
control room displays contained all the information necessary to
determine the status of the plant, experts say. Yet the operators
could not sort it out fast enough and made a crucial error that
exacerbated the accident.
''There was plenty of information,'' said David G. Cain, program
manager for diagnostic instrumentation at the Electric Power Research
Institute's Nuclear Safety Analysis Center. ''Some would call it
information overload. Having a high-quality information system so
that operators could have seen the forest through the trees could
have prevented'' the accident, he added.
The Nuclear Regulatory Commission has reached a similar conclusion.
Last month it approved a rule requiring nuclear plants to install
safety display systems that cull the information most important for
plant safety and present it in a single place, generally on a
computerized video screen. The installation of early versions of such
systems has already begun.
While the commission requires that the systems merely display
information, some systems are analyzing it as well. A group of
electric companies using Westinghouse reactors has designed a system
that can advise an operator of the probability of a major problem in
the reactor. The Electric Power Research Institute, an organization
financed by electric utilities, is developing a system that could
help evaluate safety problems.
The government's Savannah River plant in Aiken, S.C., which makes
material for nuclear weapons, has a system that analyzes the pattern
of multiple alarms after a reactor malfunction and recommends the
procedures to follow. In Europe, a system called Halo (handling of
alarms with logic) is being developed; it suppresses all but the
most important alarm.
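To make the idea concrete, here is a minimal sketch, in modern
Python notation, of the kind of priority-based alarm filtering such
systems perform. The alarm names and priority numbers are invented
for illustration and are not taken from Halo or the Savannah River
system.

    # Hypothetical priority-based alarm filter, in the spirit of
    # systems like Halo. Alarm names and priorities are invented.
    # Lower numbers mean more important alarms.
    PRIORITIES = {
        "reactor-coolant-pressure-low": 1,
        "feedwater-flow-low": 2,
        "turbine-vibration-high": 3,
    }

    def filter_alarms(active):
        """Keep only the most important of the active alarms,
        suppressing the rest so the operator is not flooded."""
        if not active:
            return []
        best = min(PRIORITIES.get(a, 99) for a in active)
        return [a for a in active if PRIORITIES.get(a, 99) == best]

    # A malfunction may trigger many alarms at once; only the
    # highest-priority one is presented to the operator.
    print(filter_alarms(["turbine-vibration-high",
                         "feedwater-flow-low",
                         "reactor-coolant-pressure-low"]))
    # prints: ['reactor-coolant-pressure-low']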
The so-called expert system, which would use artificial
intelligence to ''reason'' much as a human operator does, is still
in the future. Such systems are developed by interviewing experts
on a subject and incorporating the rules they use into a computer
program.
Two expert systems for nuclear plants will be discussed at the
artificial intelligence conference in Pittsburgh - one developed by
William E. Underwood, an assistant professor of computer science at
the Georgia Institute of Technology, and the other by William R.
Nelson of EG&G Idaho Inc., a research firm.
Nelson's system contains a collection of ''if-then'' statements
detailing relationships among parts of the plant. To use some
hypothetical examples: if the pressure in a certain tank is
falling, then the flow of water might be inadequate; if the flow of
water is inadequate, then a pipe might have broken; and so on.
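Written as data, those hypothetical examples might look like the
following sketch; the condition names come from the examples above,
not from Nelson's actual program.

    # Hypothetical rule base built from the article's examples; each
    # pair reads: if EVIDENCE is observed, HYPOTHESIS may be true.
    RULES = [
        ("tank-pressure-falling", "water-flow-inadequate"),
        ("water-flow-inadequate", "pipe-broken"),
    ]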
The program can determine the cause of an accident or, failing
that, identify the additional information needed to reach a
conclusion and ask the operator to provide it. It can also suggest
alternative steps to take in case of an accident.
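A toy version of that diagnostic loop, sketched below under the
same caveat that it illustrates the general technique rather than
Nelson's program, chains through the rules and asks the operator
whenever an observation is missing.

    # Toy diagnostic loop over the if-then rules sketched above;
    # unknown observations are requested from the operator.
    RULES = [
        ("tank-pressure-falling", "water-flow-inadequate"),
        ("water-flow-inadequate", "pipe-broken"),
    ]

    def ask_operator(fact):
        """Stand-in for querying the operator about an observation."""
        answer = input(f"Is '{fact}' observed? (y/n) ")
        return answer.strip().lower().startswith("y")

    def diagnose(rules, known=None):
        """Follow the rules from observed evidence toward a possible
        cause, requesting missing observations from the operator."""
        known = dict(known or {})
        conclusions = []
        for evidence, hypothesis in rules:
            if evidence not in known:
                known[evidence] = ask_operator(evidence)
            if known[evidence]:
                conclusions.append(hypothesis)
                known[hypothesis] = True  # feeds later rules in the chain
        return conclusions

    # Falling tank pressure leads first to suspected inadequate water
    # flow, and from there to a possibly broken pipe.
    print(diagnose(RULES, known={"tank-pressure-falling": True}))
    # prints: ['water-flow-inadequate', 'pipe-broken']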
Such systems are years away, however, except perhaps for training
purposes. The major reactor manufacturers and the Nuclear Regulatory
Commission have not yet seriously thought about such complex systems.
No one suggests that computers could operate plants without human
help, because the programs could not be designed for every
contingency. They are merely being considered as aids to the
operator. Nevertheless, there is concern that too much dependence
could be placed on the computer. ''One has to worry about who's in
control, the operator or the computer,'' said Leo Beltracchi, a human
factors engineer for the NRC.
Another concern is the possibility of errors in the computer's
programs. ''I don't want to be responsible for a piece of software
that causes $5 billion worth of damage,'' said John R. Gabriel, a
computer scientist at the Argonne National Laboratory.
But even a correct computer program that perfectly emulates human
experts can make mistakes, because the humans themselves make
mistakes. That in itself could prove a barrier to acceptance.
''What people have difficulty envisioning is a computer program that
can make a mistake without having a bug in it,'' said S. Jerrold
Kaplan, vice president for business development for Teknowledge Inc.,
a company in Palo Alto, Calif., that designs expert systems. ''People
are not yet ready to accept computers that can make mistakes.''
nyt-08-12-82 1154edt